Robust Large Margin Deep Neural Networks
Authors
Abstract
Similar Papers
Large Margin Deep Neural Networks: Theory and Algorithms
Deep neural networks (DNNs) have achieved huge practical success in recent years. However, their theoretical properties (in particular, generalization ability) are not yet well understood, since existing error bounds for neural networks cannot be directly used to explain the statistical behavior of practically adopted DNN models (which are multi-class in nature and may contain convolutional layer...
Large Margin Deep Networks for Classification
We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature ...
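As a concrete illustration of the margin notion in this entry: for a linear multi-class classifier, the distance to each pairwise decision boundary has a closed form, and the margin is the minimum of those distances over the competing classes. The sketch below is illustrative only, with a hypothetical `linear_margin` helper and toy weights; it is not the deep-network formulation proposed in the paper.

```python
import numpy as np

def linear_margin(x, W, b, y):
    """Signed distance from x to the nearest decision boundary of a
    linear multi-class classifier f_k(x) = W[k] @ x + b[k], measured
    relative to the true class y. Positive means correctly classified;
    the magnitude is the distance to the closest boundary."""
    scores = W @ x + b
    margins = []
    for j in range(len(b)):
        if j == y:
            continue
        # Exact distance to the hyperplane separating class y from class j.
        margins.append((scores[y] - scores[j]) / np.linalg.norm(W[y] - W[j]))
    return min(margins)

# Toy example with made-up weights: 3 classes, 2-D inputs.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([2.0, 0.5])
print(linear_margin(x, W, b, y=0))
```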
Large-Margin Classification in Infinite Neural Networks
We introduce a new family of positive-definite kernels for large margin classification in support vector machines (SVMs). These kernels mimic the computation in large neural networks with one layer of hidden units. We also show how to derive new kernels, by recursive composition, that may be viewed as mapping their inputs through a series of nonlinear feature spaces. These recursively derived k...
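The abstract does not spell out the kernel family; assuming it refers to a degree-one arc-cosine-style kernel (one infinitely wide layer of threshold-linear hidden units) and its recursive composition, a minimal NumPy sketch could look like the following. The helper names `arccos_kernel_1` and `composed_kernel_1` are mine, not the paper's.

```python
import numpy as np

def _J1(theta):
    # Angular part of the degree-one arc-cosine kernel.
    return np.sin(theta) + (np.pi - theta) * np.cos(theta)

def arccos_kernel_1(x, z):
    """Degree-one kernel k(x, z) = (1/pi) ||x|| ||z|| J1(theta):
    the similarity computed by an infinitely wide single hidden layer
    of ReLU-like units with Gaussian random weights (assumed form)."""
    nx, nz = np.linalg.norm(x), np.linalg.norm(z)
    theta = np.arccos(np.clip(x @ z / (nx * nz), -1.0, 1.0))
    return (nx * nz / np.pi) * _J1(theta)

def composed_kernel_1(x, z, depth=2):
    """Recursive composition: re-apply the kernel in the feature space
    induced by the previous level, mimicking a deeper network. For this
    degree-one form, k(x, x) = ||x||^2 is preserved at every level."""
    kxx, kzz = np.dot(x, x), np.dot(z, z)   # self-kernels at every level
    kxz = arccos_kernel_1(x, z)             # level-1 cross kernel
    for _ in range(depth - 1):
        theta = np.arccos(np.clip(kxz / np.sqrt(kxx * kzz), -1.0, 1.0))
        kxz = (np.sqrt(kxx * kzz) / np.pi) * _J1(theta)
    return kxz

# Toy usage.
x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(arccos_kernel_1(x, z), composed_kernel_1(x, z, depth=3))
```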
Towards Robust Deep Neural Networks with BANG
Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. This unexpected lack of robustness raises fundamental questions about their generalization properties and poses a serious concern for practical deployments. As such perturbations can remain imperceptible – commonly called adversarial exampl...
MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
With the growing complexity of machine learning techniques, understanding the functioning of black-box models is more important than ever. A recently popular strategy towards interpretability is to generate explanations based on examples – called influential samples – that have the largest influence on the model’s observed behavior. However, for such an analysis, we are confronted with a pletho...
Journal
Journal title: IEEE Transactions on Signal Processing
Year: 2017
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/tsp.2017.2708039